Embodied Whole Body Movement in VR



How to find your way through Cyberspace?

Designing, building, and evaluating novel, embodied human-computer interfaces for more natural, effective, and enjoyable locomotion through virtual environments

Name of researcher

Bernhard Riecke

URL of researcher's main body of work

http://www.siat.sfu.ca/faculty/Bernhard-Riecke

http://www.kyb.mpg.de/publication.html?user=bernie

http://www.kyb.mpg.de/projects.html?user=bernie&compl=1

http://www.kyb.mpg.de/projects.html?prj=142&user=bernie&compl=1

http://www.kyb.mpg.de/bu/people/bernie/projectDescription.html

 

Lab or location where project development will occur (if applicable)

Grey Box & VR Lab (maybe also black box)

 

Short Description of Project.

Are you interested in

  • exciting state-of-the-art research,
  • designing/building more effective and embodied human-computer interfaces that
  • allow us to navigate and spatially orient more naturally and effectively in Virtual Reality and other computer-mediated spaces,
  • evaluating those designs, and
  • presenting/publishing the results at internationally renowned conferences and in journals?

Well, if so, this might be the right project for you ;-)

 

Skills required for project

What I’m looking for in the team members:

  • Strong interest in scientific research and in really understanding things at a deeper level. In particular, it would be great if you could get excited about questions like: Why do we get lost so easily in VR and other multi-media spaces (computer games are only one example…)? Why not in the real world? How could we learn from that and explore/design/construct cool, novel human-computer interfaces that are more effective yet elegant? That is, how can we use current technology to empower us to interact more naturally, effectively, and joyfully with computer-mediated environments? And how can we quantify the results and present them in a clear, scientific manner?

A team of 4-6 people that spans a variety of skill sets would be great. In particular, it would be best if the team includes

  • 2 or more people with scientific interest and some background in scientific thinking, presentation, and writing.
  • At least 1 or 2 people with good programming skills to design and develop the software side of the interfaces & experiments. A possible platform for running the user studies could be (but doesn’t have to be) Vizard from WorldViz (a Python interface for 3D Virtual Reality).
  • At least one person with some interest/background in experimental design, statistics, and data analysis (otherwise I might be able to provide that myself or teach you).
  • At least 1 person with experience in Human-Computer Interface/Interaction design (oh well, I guess this goes without saying at SIAT; I just list it here for the sake of completeness).
  • At least 1 or 2 people with good practical skills for designing/constructing/building the actual setups, interfaces, etc. to be used in the experiments.

Some more info:

The proposed project includes designing, building, and programming a (multi-modal?) interface for more effective spatial orientation & navigation through computer-mediated environments and spaces. This could include embodied and whole-body motion interfaces/interaction paradigms like movable chairs (a modified hammock chair?), gestures, leaning motions (using, e.g., a Wii Balance Board?), actual/treadmill walking, dancing, etc. I’m open to any suggestion that seems exciting, doable, and has some chance of working ;-)
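As one concrete illustration of the leaning-based interfaces mentioned above, here is a minimal sketch of how a normalized center-of-pressure reading (e.g., from a Wii Balance Board) could be mapped to a travel velocity. This is plain Python; the function name, parameter values, and the linear mapping are illustrative assumptions, not part of the project description:

```python
import math

def lean_to_velocity(cop_x, cop_y, dead_zone=0.05, gain=2.0, max_speed=1.5):
    """Map a normalized center-of-pressure offset (-1..1 on each axis),
    e.g. from a balance board, to a (sideways, forward) velocity in m/s.

    A small dead zone around the neutral stance suppresses postural sway;
    beyond it, speed grows linearly with lean magnitude up to max_speed.
    """
    magnitude = math.hypot(cop_x, cop_y)
    if magnitude < dead_zone:
        return (0.0, 0.0)  # standing roughly still: ignore sway
    # Speed ramps up linearly from the dead-zone edge, capped at max_speed
    speed = min(gain * (magnitude - dead_zone), max_speed)
    scale = speed / magnitude
    return (cop_x * scale, cop_y * scale)
```

The dead zone keeps normal postural sway from causing drift, while the gain and maximum speed would need tuning in pilot tests with the actual hardware.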

In terms of evaluation/experiments, we could use simple yet elegant spatial orientation paradigms that I have a lot of experience with. Alternatively (and probably more interestingly), the project could be integrated into a computer game if the participants are interested and have sufficient skills in that area (this is not a must, though). The evaluation study could also be designed to have a game-like character to make it more natural and enjoyable.

Venues for your research (include conferences, presentation and publication venues, etc)

  • Apart from project demos at the end of the term, this would be a great opportunity to run well-designed evaluation studies/experiments and submit these to international journals and/or conferences.
  • Potential conferences/venues include for example
    • SIGCHI (CHI) 2009, April 5-9, Boston, USA; http://www.sigchi.org/conferences/
    • APGV 2009; apgv.org
    • IEEE VR 2009
    • ACM Symposium on User Interface Software and Technology (UIST); http://www.acm.org/uist (paper deadline in April?)
    • Conference calendar: http://www.sigchi.org/conferences/calendarofevents.html

----------

 

GOAL:

The overall goal of my research endeavor is to investigate what constitutes effective, robust, and intuitive human spatial orientation and behavior, and employ this knowledge to design human-computer interfaces that enable similar processes in computer-mediated environments like virtual reality (VR) and multi-media.

MOTIVATION:

Due to the impressive technological and scientific progress of the last few decades, we now have computers with unprecedented computational power, equipped with advanced human-computer interfaces; even VR software and hardware are becoming increasingly available and affordable. Despite these technological advances, we frequently become disoriented when navigating virtual environments, because the supported spatial behavior is still clumsy and unnatural. To improve virtual reality simulations to the point where perceptual illusions mimic reality vividly enough to enable natural (i.e., effortless yet effective) behavior, this research program will explore the underlying perceptual/cognitive processes.

APPROACH:

My goal is to enable natural and effective human spatial performance in computer-mediated environments. I propose to perform fundamental research (building on, but going far beyond, my previous research) to investigate the determining factors and their interactions. These include visual, auditory, biomechanical, and kinesthetic cues as well as multi-modal and higher-level contributions and interactions. To date, research (including my own) has treated self-motion perception and spatial orientation as two separate fields. I posit that overcoming this separation will not only foster a deeper understanding of human perception and behavior, but ultimately enable us to design more effective yet affordable human-computer interfaces that empower us rather than restrain us. I have extensive experience in both fields, and plan to combine these with my background in self-motion simulation and human interface design in one comprehensive and ambitious research program.

In the real world, robust and effortless spatial orientation critically relies on so-called "automatic/obligatory spatial updating", a largely automatized, reflex-like process that transforms our mental egocentric representation of the immediate surroundings during self-motion. Yet even state-of-the-art VR setups often fail to engage this highly efficient spatial updating mechanism, leading to disorientation and ultimately to reduced usability, performance, and user acceptance. While real-world locomotion is naturally accompanied by compelling sensations of self-motion, simulated locomotion in VR typically is not. As a working hypothesis, I propose that investigating and providing a compelling, embodied sensation of (illusory) self-motion in VR is essential for empowering humans to orient and behave naturally and effectively in VR.
My experimental paradigms will include self-motion perception, navigation/spatial orientation, and spatial perception/cognition paradigms based on my previous work (e.g., homing, rapid pointing to no-longer-visible landmarks, or judgments of relative direction).
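The rapid-pointing paradigm above ultimately reduces to comparing the participant's pointing response with the true direction to the hidden landmark. A minimal sketch of that scoring step in plain Python (the conventions used here, angles in degrees with 0° along the +x axis measured counterclockwise and pointing given egocentrically relative to heading, are illustrative assumptions, not specified in the proposal):

```python
import math

def wrap180(angle_deg):
    """Wrap an angle into the range [-180, 180) degrees."""
    return (angle_deg + 180.0) % 360.0 - 180.0

def pointing_error_deg(observer_xy, target_xy, heading_deg, pointed_deg):
    """Signed angular error for one 'point to the no-longer-visible
    landmark' trial.

    observer_xy, target_xy: 2D positions in the virtual environment.
    heading_deg: observer's current facing direction (world frame).
    pointed_deg: pointing response, egocentric (relative to heading).
    """
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))   # world-frame direction to target
    correct = wrap180(bearing - heading_deg)     # correct egocentric direction
    return wrap180(pointed_deg - correct)        # signed error, -180..180
```

Averaging the absolute errors across trials then gives a simple per-condition measure of spatial-updating performance.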

EXPECTED BENEFITS:

The research will lead to a deeper understanding of human perception and behavior, enabling us to design more effective human-computer interfaces and interaction metaphors. In particular, research from an applied perspective will identify the essential parameters of perception/action and pinpoint the "blind spots" that allow us to trick the brain when simulating virtual reality. This will enable us to create better, cost-effective virtual solutions for numerous applications such as driving/flight simulation, space exploration, rehabilitation, education, engineering, recreation, emergency training, minimally invasive surgery, telemedicine, and other tele-presence applications.